
    Generative Models for Multi-Illumination Color Constancy


    Physics-based Shading Reconstruction for Intrinsic Image Decomposition

    We investigate the use of photometric invariance and deep learning to compute intrinsic images (albedo and shading). We propose albedo and shading gradient descriptors derived from physics-based models. Using these descriptors, albedo transitions are masked out and an initial sparse shading map is calculated directly from the corresponding RGB image gradients in a learning-free, unsupervised manner. An optimization method is then proposed to reconstruct the full dense shading map. Finally, we integrate the generated shading map into a novel deep learning framework to refine it and to predict the corresponding albedo image, achieving intrinsic image decomposition. In doing so, we are the first to directly address the texture and intensity ambiguity problems of shading estimation. Large-scale experiments show that our approach, steered by physics-based invariant descriptors, achieves superior results on the MIT Intrinsics, NIR-RGB Intrinsics, Multi-Illuminant Intrinsic Images, Spectral Intrinsic Images, and As Realistic As Possible datasets, and competitive results on the Intrinsic Images in the Wild dataset, while achieving state-of-the-art shading estimations.
    Comment: Submitted to Computer Vision and Image Understanding (CVIU)
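    As a rough illustration of the photometric-invariance idea behind this approach, the sketch below masks out image gradients at chromaticity changes (treated as albedo transitions) and keeps the remaining intensity gradients as a sparse shading gradient map. This is a minimal sketch under that assumption, not the paper's actual descriptors; the function name and threshold are illustrative.

    ```python
    import numpy as np

    def sparse_shading_gradients(rgb, chroma_thresh=0.02):
        """Illustrative sketch (not the paper's descriptors): keep image gradients
        where chromaticity is locally constant (likely shading) and mask gradients
        at chromaticity changes (likely albedo transitions)."""
        rgb = rgb.astype(np.float64) + 1e-6
        intensity = rgb.sum(axis=2)
        chroma = rgb / intensity[..., None]       # normalized rgb, intensity-invariant

        # Log-intensity gradients: candidate shading transitions.
        log_i = np.log(intensity)
        gx = np.diff(log_i, axis=1, append=log_i[:, -1:])
        gy = np.diff(log_i, axis=0, append=log_i[-1:, :])

        # Chromaticity gradients: large values indicate material (albedo) changes.
        cgx = np.abs(np.diff(chroma, axis=1, append=chroma[:, -1:, :])).sum(axis=2)
        cgy = np.abs(np.diff(chroma, axis=0, append=chroma[-1:, :, :])).sum(axis=2)

        # Mask out albedo transitions; the rest forms a sparse shading gradient map.
        sx = np.where(cgx < chroma_thresh, gx, 0.0)
        sy = np.where(cgy < chroma_thresh, gy, 0.0)
        return sx, sy
    ```

    In the paper, such a sparse map is the starting point for an optimization that reconstructs the full dense shading, which is then refined by the learned model.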

    Solid-phase extraction of β-sitosterol and α-tocopherol from deodorized sunflower oil distillates using desilicated zeolite

    In this study, the efficiency of using zeolite-based adsorbents in a solid-phase extraction (SPE) procedure to isolate α-tocopherol and β-sitosterol from Sunflower Oil Deodorizer Distillate (SuDOD) without pre-treatment was investigated. The results showed that 99.2% of the α-tocopherol and 97.3% of the β-sitosterol were successfully isolated as pure fractions from SuDOD when desilicated ZSM-5-type zeolite (DSiZSM-5) was used as the SPE adsorbent. A simple and rapid HPLC method for simultaneous α-tocopherol and β-sitosterol analysis was developed and validated according to AOAC guidelines. The inclusion of the DSiZSM-5 SPE step was found to increase the precision of the α-tocopherol and β-sitosterol analysis. In conclusion, DSiZSM-5 zeolite proved to be an efficient adsorbent that can be used not only for the recovery of α-tocopherol and β-sitosterol from SuDOD at industrial scale, but also in a laboratory-scale clean-up method prior to the analysis of α-tocopherol and β-sitosterol.
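    For context, recovery and precision figures of the kind reported above are typically computed from replicate chromatographic measurements against a calibration curve. The snippet below is a hypothetical worked example with made-up peak areas and an assumed calibration slope, not the paper's data or method parameters.

    ```python
    import numpy as np

    # Hypothetical replicate HPLC peak areas (arbitrary units); not the paper's data.
    spiked_amount_mg = 5.0                     # alpha-tocopherol added to the sample
    calibration_slope = 120.0                  # peak area per mg, assumed calibration line
    peak_areas = np.array([596.0, 601.0, 594.0, 598.0, 600.0])

    measured_mg = peak_areas / calibration_slope
    recovery_pct = measured_mg.mean() / spiked_amount_mg * 100    # extraction recovery
    rsd_pct = measured_mg.std(ddof=1) / measured_mg.mean() * 100  # precision as relative SD

    print(f"recovery: {recovery_pct:.1f}%  RSD: {rsd_pct:.2f}%")
    ```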

    ShadingNet: Image Intrinsics by Fine-Grained Shading Decomposition

    In general, intrinsic image decomposition algorithms interpret shading as one unified component that includes all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. In this paper, we therefore propose to decompose the shading component into direct shading (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects in order to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach, using fine-grained shading decompositions, outperforms state-of-the-art algorithms that use unified shading on the NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS, and SRD datasets.
    Comment: Submitted to the International Journal of Computer Vision (IJCV)
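    A minimal sketch of the fine-grained image formation implied by the abstract, assuming the image is recomposed as albedo times the sum of direct and indirect shading; the paper's exact composition model and network details may differ.

    ```python
    import numpy as np

    def recompose(albedo, direct_shading, indirect_shading):
        """Assumed fine-grained image formation: the image equals albedo modulated
        by the sum of direct and indirect shading. This only illustrates the
        decomposition being learned, not ShadingNet itself."""
        shading = direct_shading + indirect_shading   # unified shading from subcomponents
        return albedo * shading                       # reconstructed image

    # A reconstruction error on predicted components can serve as a sanity check:
    # err = np.abs(recompose(A_pred, S_dir_pred, S_ind_pred) - rgb_image).mean()
    ```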

    Automatic generation of dense non-rigid optical flow

    Hardly any large-scale datasets with dense optical flow of non-rigid motion from real-world imagery exist today, mainly because of the difficulty of human annotation for generating optical flow ground truth. To circumvent the need for human annotation, we propose a framework to automatically generate optical flow from real-world videos. The method extracts and matches objects from video frames to compute initial constraints, and applies a deformation over the objects of interest to obtain dense optical flow fields. We propose several ways to augment the optical flow variations. Extensive experimental results show that training on our automatically generated optical flow outperforms training on rigid synthetic data using FlowNet-S, PWC-Net, and LiteFlowNet. The datasets and algorithms of our optical flow generation framework are available at https://github.com/lhoangan/arap_flow.
    Comment: The paper is under consideration at Computer Vision and Image Understanding
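    To illustrate the general principle that a known synthetic deformation yields dense ground-truth flow without annotation, the sketch below warps an image with a smooth displacement field. The authors' pipeline instead deforms matched objects from real videos, so this is only a simplified stand-in; the deformation field and function names are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_with_known_flow(image, flow):
        """Warp a grayscale image (H, W) with a dense flow field (H, W, 2) in (dx, dy).
        Each output pixel samples the source at its position minus the flow vector,
        so (image, warped, flow) forms a self-consistent training triplet with exact
        dense flow ground truth and no human annotation."""
        h, w = image.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        coords = np.stack([ys - flow[..., 1], xs - flow[..., 0]])  # row, col sampling grid
        return map_coordinates(image, coords, order=1, mode='nearest')

    # Example: a smooth sinusoidal deformation as a stand-in for object-level
    # non-rigid deformations used in the paper's pipeline.
    h, w = 64, 64
    img = np.random.rand(h, w)
    ys, xs = np.mgrid[0:h, 0:w]
    flow = np.stack([2.0 * np.sin(ys / 8.0), 1.5 * np.cos(xs / 8.0)], axis=-1)
    warped = warp_with_known_flow(img, flow)
    ```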